OneKE: A Dockerized Schema-Guided LLM Agent-based Knowledge Extraction System

Luo, Yujie, Ru, Xiangyuan, Liu, Kangwei, Yuan, Lin, Sun, Mengshu, Zhang, Ningyu, Liang, Lei, Zhang, Zhiqiang, Zhou, Jun, Wei, Lanning, Zheng, Da, Wang, Haofen, Chen, Huajun

arXiv.org Artificial Intelligence

We introduce OneKE, a dockerized schema-guided knowledge extraction system that can extract knowledge from the Web and raw PDF books and supports various domains (science, news, etc.). Specifically, we design OneKE with multiple agents and a configurable knowledge base. Different agents perform their respective roles, enabling support for various extraction scenarios. The configurable knowledge base facilitates schema configuration, error-case debugging, and correction, further improving performance. Empirical evaluations on benchmark datasets demonstrate OneKE's efficacy, while case studies further elucidate its adaptability to diverse tasks across multiple domains, highlighting its potential for broad applications. The code is open-sourced at https://github.com/zjunlp/OneKE and a demo video is available at http://oneke.openkg.cn/demo.mp4.
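To make "schema-guided" concrete, the sketch below shows one common way such a system can work: a schema both shapes the extraction prompt sent to an LLM and filters the model's output down to conformant records. All names here (`SCHEMA`, `build_extraction_prompt`, `validate_extraction`, the mock response) are illustrative assumptions, not OneKE's actual API.

```python
import json

# Hypothetical schema: the keys every extracted record must carry.
SCHEMA = {
    "entity": "string",
    "relation": "string",
    "object": "string",
}

def build_extraction_prompt(text, schema):
    """Compose an instruction asking the model to emit JSON matching `schema`."""
    return (
        "Extract knowledge triples from the text below.\n"
        f"Return a JSON list of objects with keys: {', '.join(schema)}.\n"
        f"Text: {text}"
    )

def validate_extraction(raw_json, schema):
    """Keep only records that carry exactly the schema's keys."""
    records = json.loads(raw_json)
    return [r for r in records if set(r) == set(schema)]

# A (mock) model response: one conformant record, one incomplete record.
mock_response = (
    '[{"entity": "OneKE", "relation": "extracts_from", "object": "PDF books"},'
    ' {"entity": "OneKE"}]'
)
valid = validate_extraction(mock_response, SCHEMA)  # incomplete record is dropped
```

Validating against the schema after generation is what lets error cases be logged and debugged, which is the role the configurable knowledge base plays in the system described above.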


Case Repositories: Towards Case-Based Reasoning for AI Alignment

Feng, K. J. Kevin, Chen, Quan Ze, Cheong, Inyoung, Xia, King, Zhang, Amy X.

arXiv.org Artificial Intelligence

Case studies commonly form the pedagogical backbone in law, ethics, and many other domains that face complex and ambiguous societal questions informed by human values. Similar complexities and ambiguities arise when we consider how AI should be aligned in practice: when faced with vast quantities of diverse (and sometimes conflicting) values from different individuals and communities, with whose values is AI to align, and how should AI do so? We propose a complementary approach to constitutional AI alignment, grounded in ideas from case-based reasoning (CBR), that focuses on the construction of policies through judgments on a set of cases. We present a process to assemble such a case repository by: 1) gathering a set of "seed" cases -- questions one may ask an AI system -- in a particular domain, 2) eliciting domain-specific key dimensions for cases through workshops with domain experts, 3) using LLMs to generate variations of cases not seen in the wild, and 4) engaging with the public to judge and improve cases. We then discuss how such a case repository could assist in AI alignment, both through directly acting as precedents to ground acceptable behaviors, and as a medium for individuals and communities to engage in moral reasoning around AI.
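The four-step assembly process above can be sketched as a small data pipeline. This is a hypothetical illustration under stated assumptions: the function names and example data are invented, and step 3 (LLM-generated variations) is stood in for by tagging each seed case with every combination of expert-elicited dimension values.

```python
import itertools

def gather_seed_cases():
    # Step 1: seed questions one might ask an AI system in a given domain.
    return ["May an AI tutor complete a student's homework?"]

def elicit_dimensions():
    # Step 2: expert workshops yield key dimensions along which cases vary.
    return {"stakes": ["low", "high"], "user_intent": ["learning", "cheating"]}

def generate_variations(seed, dimensions):
    # Step 3: the paper uses LLMs to rewrite the seed per dimension setting;
    # here we simply attach each dimension combination as a stand-in.
    keys = list(dimensions)
    combos = itertools.product(*(dimensions[k] for k in keys))
    return [{"case": seed, **dict(zip(keys, c))} for c in combos]

def judge_cases(cases):
    # Step 4: public engagement attaches a judgment slot to each case.
    return [{**c, "judgment": "pending"} for c in cases]

# One seed crossed with 2x2 dimension values yields four judged cases.
repository = judge_cases(
    [v for s in gather_seed_cases() for v in generate_variations(s, elicit_dimensions())]
)
```

The cross-product over dimensions is what lets the repository cover case variants "not seen in the wild" before the public judging step refines them.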